
    Safety of medication use in primary care

    © 2014 Royal Pharmaceutical Society. BACKGROUND: Medication errors are one of the leading causes of harm in health care. Review and analysis of errors have often emphasized their preventable nature and potential for recurrence. Of the few error studies conducted in primary care to date, most have focused on evaluating individual parts of the medicines management system. Studying individual parts of the system does not provide a complete perspective and may further weaken the evidence and undermine interventions. AIM AND OBJECTIVES: The aim of this review is to estimate the scale of medication errors as a problem across the medicines management system in primary care. The objectives were: to review studies addressing the rates of medication errors, and to identify studies on interventions to prevent medication errors in primary care. METHODS: A systematic search of the literature was performed in PubMed (MEDLINE), International Pharmaceutical Abstracts (IPA), Embase, PsycINFO, PASCAL, Science Direct, Scopus, Web of Knowledge, and CINAHL PLUS from 1999 to November 2012. Bibliographies of relevant publications were searched for additional studies. KEY FINDINGS: Thirty-three studies estimating the incidence of medication errors and thirty-six studies evaluating the impact of error-prevention interventions in primary care were reviewed. This review demonstrated that medication errors are common, with reported error rates ranging up to 90%, depending on the part of the system studied and the definitions and methods used. The prescribing stage is the most susceptible, and the elderly (over 65 years) and children (under 18 years) are more likely to experience significant errors. Individual interventions demonstrated marginal improvements in medication safety when implemented on their own. CONCLUSION: Targeting the more susceptible population groups and the most dangerous aspects of the system may be a more effective approach to error management and prevention.
Co-implementation of existing interventions at points within the system may offer time- and cost-effective options for improving medication safety in primary care.

    Health-related quality of life in patients with surgically treated lumbar disc herniation: 2- and 7-year follow-up of 117 patients

    BACKGROUND AND PURPOSE: Health-related quality of life (HRQoL) instruments have been of increasing interest for the evaluation of medical treatments over the past 10-15 years. In this prospective, long-term follow-up study we investigated the influence of preoperative factors and the change in HRQoL over time after lumbar disc herniation surgery. METHODS: 117 patients surgically treated for lumbar disc herniation (L4-L5 or L5-S1) were evaluated with a self-completion HRQoL instrument (EQ-5D) preoperatively, after 2 years (96 patients), and after 7 years (89 patients). Baseline data (age, sex, duration of leg pain, surgical level) and degree of leg and back pain (VAS) were obtained preoperatively. The mean age was 39 (18-66) years, 54% were men, and the surgical level was L5-S1 in 58% of the patients. The change in EQ-5D score at the 2-year follow-up was analyzed by testing for correlation and by using a multiple regression model including all baseline factors (age, sex, duration of pain, degree of leg and back pain, and baseline EQ-5D score) as potential predictors. RESULTS: 85% of the patients reported improvement in EQ-5D two years after surgery, and this result remained at the long-term follow-up. The mean difference (change) between the preoperative EQ-5D score and the 2-year and 7-year scores was 0.59 (p < 0.001) and 0.62 (p < 0.001), respectively. However, the HRQoL for this patient group did not reach the mean level of previously reported values for a normal population of the same age range at any of the follow-ups. The change in EQ-5D score between the 2- and 7-year follow-ups was not statistically significant (mean change 0.03, p = 0.2). There was a correlation between baseline leg pain and the change in EQ-5D at the 2-year (r = 0.33, p = 0.002) and 7-year follow-up (r = 0.23, p = 0.04). 
However, in the regression analysis the only statistically significant predictor of change in EQ-5D was the baseline EQ-5D score. INTERPRETATION: Our findings suggest that HRQoL (as measured by EQ-5D) improved 2 years after lumbar disc herniation surgery, with no further improvement after 5 more years. Low quality of life and severe leg pain at baseline are important predictors of improvement in quality of life after lumbar disc herniation surgery. Funding: Marianne och Marcus Wallenberg Foundation; ALF Västra Götaland; Gothenburg Medical Association; Swedish Society of Medicine; Felix Neubergh Foundation.
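The analysis above rests on two simple computations: a per-patient change score (follow-up EQ-5D minus baseline) and Pearson's r between baseline leg pain and that change. A minimal sketch in Python, using entirely hypothetical patient values (the study's raw data are not reproduced here):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical patients: (baseline EQ-5D, 2-year EQ-5D, baseline leg pain VAS)
patients = [(0.10, 0.80, 80), (0.30, 0.85, 70), (0.50, 0.90, 40), (0.20, 0.75, 65)]

# Change score: follow-up minus baseline, one value per patient
changes = [two_year - baseline for baseline, two_year, _ in patients]
leg_pain = [vas for _, _, vas in patients]

mean_change = sum(changes) / len(changes)
r = pearson_r(leg_pain, changes)
print(round(mean_change, 3), round(r, 2))
```

With real data the same change scores would then enter the multiple regression model as the dependent variable.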

    Cost effectiveness of disc prosthesis versus lumbar fusion in patients with chronic low back pain: randomized controlled trial with 2-year follow-up

    This randomized controlled health economic study assesses the cost-effectiveness of the concept of total disc replacement (TDR) (Charité/Prodisc/Maverick) compared with the concept of instrumented lumbar fusion (FUS) [posterior lumbar fusion (PLF)/posterior lumbar interbody fusion (PLIF)]. Societal and healthcare perspectives after 2 years are reported. In all, 152 patients were randomized to either TDR (n = 80) or lumbar FUS (n = 72). The cost to society (total mean cost/patient, Swedish kronor = SEK, standard deviation) was SEK 599,560 (400,272) for TDR and SEK 685,919 (422,903) for lumbar FUS; the difference was not significant: SEK 86,359 (−45,605 to 214,332). TDR was significantly less costly from a healthcare perspective: SEK 22,996 (1,202 to 43,055). The number of days on sick leave among those who returned to work was 185 (146) in the TDR group and 252 (189) in the FUS group (ns). Using EQ-5D, the total gain in quality-adjusted life years (QALYs) over 2 years was 0.41 units for TDR and 0.40 units for FUS (ns). Based on EQ-5D, the incremental cost-effectiveness ratio (ICER) of using TDR instead of FUS was difficult to analyze due to the “non-difference” in treatment outcome, so cost/QALY was not meaningful to define. Using probabilistic cost-effectiveness analysis, the net benefit (with CI) was found to be SEK 91,359 (−73,643 to 249,114) (ns). We used 2006 currency values, where 1 EUR = 9.26 SEK and 1 USD = 7.38 SEK. It was not possible to state whether TDR or FUS is more cost-effective after 2 years. Since disc replacement and lumbar fusion are based on different conceptual approaches, it is important to follow these results over time.
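The cost comparison above is plain arithmetic on the reported figures. A small sketch using only the exchange rates and mean costs stated in the abstract (function and variable names are mine, for illustration):

```python
# 2006 exchange rates as stated in the abstract
SEK_PER_EUR = 9.26
SEK_PER_USD = 7.38

def sek_to_eur(sek: float) -> float:
    """Convert Swedish kronor to euros at the 2006 rate."""
    return sek / SEK_PER_EUR

def sek_to_usd(sek: float) -> float:
    """Convert Swedish kronor to US dollars at the 2006 rate."""
    return sek / SEK_PER_USD

# Mean societal cost per patient, from the abstract
cost_fus_sek = 685_919
cost_tdr_sek = 599_560

societal_diff_sek = cost_fus_sek - cost_tdr_sek  # 86359, the reported difference
print(societal_diff_sek)
print(round(sek_to_eur(societal_diff_sek)))  # same difference expressed in EUR
```

The confidence intervals around these means span zero, which is why the difference is reported as not significant despite its size.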

    Long-term results after Boston brace treatment in late-onset juvenile and adolescent idiopathic scoliosis

    <p>Abstract</p> <p>Background</p> <p>It is recommended that research in patients with idiopathic scoliosis should focus on short- and long-term patient-centred outcome. The aim of the present study was to evaluate outcome in patients with late-onset juvenile or adolescent idiopathic scoliosis 16 years or more after Boston brace treatment.</p> <p>Methods</p> <p>272 (78%) of 360 patients, 251 (92%) women, responded to follow-up examination at a mean of 24.7 (range 16 - 32) years after Boston brace treatment. Fifty-eight (21%) patients had late-onset juvenile and 214 had adolescent idiopathic scoliosis. All patients had clinical and radiological examination and answered a standardised questionnaire including work status, demographics, General Function Score (GFS) (100 - worst possible) and Oswestry Disability Index (ODI) (100 - worst possible), EuroQol (EQ-5D) (1 - best possible), EQ-VAS (100 - best possible), and Scoliosis Research Society - 22 (SRS - 22) (5 - best possible).</p> <p>Results</p> <p>The mean age at follow-up was 40.4 (31-48) years. The prebrace major curve was on average 33.2 (20 - 57)°. At weaning and at the last follow-up the corresponding values were 28.3 (1 - 58)° and 32.5 (7 - 80)°, respectively. Curve development was similar in patients with late-onset juvenile and adolescent start. The prebrace curve increased > 5° in 31% and decreased > 5° in 26%. Twenty-five patients had surgery. Those who did not attend follow-up (n = 88) had a lower mean curve at weaning: 25.4 (6-53)°. Work status was 76% full-time and 10% part-time. Eighty-seven percent had delivered a baby; 50% had pain during pregnancy. The mean (SD) GFS was 7.4 (10.8), ODI 9.3 (11.0), EQ-5D 0.82 (0.2), EQ-VAS 77.6 (17.8), SRS-22: pain 4.1 (0.8), mental health 4.1 (0.6), self-image 3.7 (0.7), function 4.0 (0.6), satisfaction with treatment 3.7 (1.0). 
Surgical patients had significantly reduced scores for SRS-physical function and self-image, and patients with curves ≥ 45° had reduced self-image.</p> <p>Conclusion</p> <p>Long-term results were satisfactory in most braced patients and similar in late-onset juvenile and adolescent idiopathic scoliosis.</p>

    Electromyographic assessment of muscle fatigue in massive rotator cuff tear

    Shoulder muscle fatigue has not previously been assessed in massive rotator cuff tear (MRCT). This study used EMG to measure the fatigability of 13 shoulder muscles in 14 healthy controls and 11 patients with MRCT. A hand-grip protocol was applied to minimise artifacts due to pain experienced during measurement. The fatigue index (median frequency slope) was significantly non-zero (negative) for the anterior, middle, and posterior parts of the deltoid, supraspinatus, and subscapularis muscles in the controls, and for the anterior, middle, and posterior parts of the deltoid and pectoralis major in patients (p ≤ 0.001). Fatigue was significantly greater in patients than in controls for the anterior and middle parts of the deltoid and pectoralis major (p ≤ 0.001). A submaximal grip task provided a feasible way to assess shoulder muscle fatigue in MRCT patients, although with some limitations. The results suggest that increased activation of the deltoid is required to compensate for lost supraspinatus abduction torque. The increased pectoralis major fatigue in patients (adduction torque) likely reflected a strategy to stabilise the humeral head against the superior subluxing force of the deltoid. Considering physiotherapy as a primary or adjunct intervention for the management of MRCT, the findings of this study provide a basis for future clinical studies aiming at the development of an evidence-based protocol.
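A minimal sketch, not the authors' processing pipeline, of how a median-frequency-slope fatigue index can be computed: the median frequency of the EMG power spectrum is found for each successive time window, and the fatigue index is the least-squares slope of those values over time. A negative slope indicates the downward spectral shift that is the classic sign of muscle fatigue. The numbers below are synthetic, for illustration only:

```python
def median_frequency(freqs, powers):
    """Frequency below which half of the total spectral power lies."""
    total = sum(powers)
    cumulative = 0.0
    for f, p in zip(freqs, powers):
        cumulative += p
        if cumulative >= total / 2.0:
            return f
    return freqs[-1]

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Synthetic median frequencies of successive 10-second windows
window_times = [0, 10, 20, 30, 40]             # seconds
median_freqs = [85.0, 82.0, 80.5, 78.0, 76.0]  # Hz, drifting downwards

fatigue_index = slope(window_times, median_freqs)
print(fatigue_index)  # negative: spectrum shifting down over the task
```

In practice the per-window spectra would come from a power spectral density estimate of the raw EMG; that step is omitted here.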

    Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change

    Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed, which can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between the approaches. Some authors have tried to arrive at a uniform measure for the MIC, such as 0.5 standard deviation or the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have merely focused on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
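The distribution-based quantities mentioned above follow standard formulas: SEM = SD × √(1 − r), where r is a test-retest reliability coefficient (e.g. an ICC), and the minimally detectable change at 95% confidence is MDC95 = 1.96 × √2 × SEM, the √2 accounting for measurement error in both of the two scores being compared. A short sketch with hypothetical inputs:

```python
import math

def sem_from_reliability(sd_baseline: float, reliability: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd_baseline * math.sqrt(1.0 - reliability)

def mdc95(sem: float) -> float:
    """Minimally detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem

# Hypothetical questionnaire properties, for illustration only
sd = 10.0   # baseline standard deviation of the score
icc = 0.85  # assumed test-retest reliability

sem = sem_from_reliability(sd, icc)
change_threshold = mdc95(sem)
print(round(sem, 2), round(change_threshold, 2))
```

An anchor-based MIC smaller than this MDC95 could not be reliably detected in individual patients, which is exactly the judgment the last sentence of the abstract describes.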

    Differences across health care systems in outcome and cost-utility of surgical and conservative treatment of chronic low back pain: a study protocol

    <p>Abstract</p> <p>Background</p> <p>There is little evidence on differences across health care systems in the choice and outcome of treatment of chronic low back pain (CLBP), with spinal surgery and conservative treatment as the main options. At least six randomised controlled trials comparing these two options have been performed; they show conflicting results without clear-cut evidence for the superior effectiveness of any of the evaluated interventions, and could not address whether the treatment effect varied across patient subgroups. Cost-utility analyses display inconsistent results when comparing surgical and conservative treatment of CLBP. Due to its higher feasibility, we chose to conduct a prospective observational cohort study.</p> <p>Methods</p> <p>This study aims to examine whether</p> <p>1. Differences across health care systems result in different treatment outcomes of surgical and conservative treatment of CLBP</p> <p>2. Patient characteristics (work-related, psychological factors, etc.) and co-interventions (physiotherapy, cognitive behavioural therapy, return-to-work programs, etc.) modify the outcome of treatment for CLBP</p> <p>3. Cost-utility in terms of quality-adjusted life years differs between surgical and conservative treatment of CLBP.</p> <p>This study will recruit 1000 patients from orthopaedic spine units, rehabilitation centres, and pain clinics in Switzerland and New Zealand. Effectiveness will be measured by the Oswestry Disability Index (ODI) at baseline and after six months. The change in ODI will be the primary endpoint of this study.</p> <p>Multiple linear regression models will be used, with the change in ODI from baseline to six months as the dependent variable and the type of health care system, type of treatment, patient characteristics, and co-interventions as independent variables. Interaction terms between type of treatment and the different co-interventions and patient characteristics will be incorporated. 
Cost-utility will be measured with an index based on EQ-5D in combination with cost data.</p> <p>Conclusion</p> <p>This study will provide evidence on whether differences across health care systems in the outcome of treatment of CLBP exist. It will classify patients with CLBP into different clinical subgroups and help to identify specific target groups who might benefit from specific surgical or conservative interventions. Furthermore, cost-utility differences will be identified for different groups of patients with CLBP. The main results of this study should be replicated in future studies on CLBP.</p>

    Carotid Plaque Age Is a Feature of Plaque Stability Inversely Related to Levels of Plasma Insulin

    We used the ¹⁴C declination curve (a result of the atomic bomb tests in the 1950s and 1960s) to determine the average biological age of carotid plaques, measuring plaque ¹⁴C content by accelerator mass spectrometry. The average plaque age (i.e. formation time) was 9.6±3.3 years. All but two plaques had formed within 5–15 years before surgery. Plaque age was not associated with the chronological ages of the patients but was inversely related to plasma insulin levels (p = 0.0014). Most plaques were echo-lucent rather than echo-rich (2.24±0.97, range 1–5). However, plaques in the lowest tercile of plaque age (the most recently formed) were characterized by greater instability, with a higher content of lipids and macrophages (67.8±12.4 vs. 50.4±6.2, p = 0.00005; 57.6±26.1 vs. 39.8±25.7, p<0.0005, respectively), less collagen (45.3±6.1 vs. 51.1±9.8, p<0.05), and fewer smooth muscle cells (130±31 vs. 141±21, p<0.05) than plaques in the highest tercile. Microarray analysis of plaques in the lowest tercile also showed increased activity of genes involved in immune responses and oxidative phosphorylation. Determining plaque age by ¹⁴C can thus improve our understanding of carotid plaque stability, and therefore of the risk for clinical complications. Our results also suggest that levels of plasma insulin might be involved in determining carotid plaque age.

    The p75 neurotrophin receptor is expressed by adult mouse dentate progenitor cells and regulates neuronal and non-neuronal cell genesis

    <p>Abstract</p> <p>Background</p> <p>The ability to regulate neurogenesis in the adult dentate gyrus will require further identification and characterization of the receptors regulating this process. <it>In vitro </it>and <it>in vivo </it>studies have demonstrated that neurotrophins and the p75 neurotrophin receptor (p75<sup>NTR</sup>) can promote neurogenesis; therefore we tested the hypothesis that p75<sup>NTR </sup>is expressed by adult dentate gyrus progenitor cells and is required for their proliferation and differentiation.</p> <p>Results</p> <p>In a first series of studies focusing on proliferation, mice received a single BrdU injection and were sacrificed 2, 10 and 48 hours later. Proliferating, BrdU-positive cells were found to express p75<sup>NTR</sup>. In a second series of studies, BrdU was administered by six daily injections and mice were sacrificed 1 day later. Dentate gyrus sections demonstrated a large proportion of BrdU/p75<sup>NTR </sup>co-expressing cells expressing either the NeuN neuronal or GFAP glial marker, indicating that p75<sup>NTR </sup>expression persists at least until early stages of maturation. In p75<sup>NTR </sup>(-/-) mice, there was a 59% decrease in the number of BrdU-positive cells, with decreases in the number of BrdU cells co-labeled with NeuN, GFAP or neither marker of 35%, 60% and 64%, respectively.</p> <p>Conclusions</p> <p>These findings demonstrate that p75<sup>NTR </sup>is expressed by adult dentate progenitor cells and point to p75<sup>NTR </sup>as an important receptor promoting the proliferation and/or early maturation of not only neuronal, but also glial and other cell types.</p>